
    Sensing and describing 3-D structure

    Discovering the three-dimensional structure of an object is important for a variety of robot tasks. Single-sensor systems such as machine vision cannot reliably compute three-dimensional structure in unconstrained environments. Active, exploratory tactile sensing can complement passive stereo vision data to derive robust surface and feature descriptions of objects. The vision system guides tactile sensing by supplying regions of interest for the tactile system to explore. The resulting surface and feature descriptions are accurate and can be used in a later matching phase against a model database of objects to identify the object and its position and orientation in space.
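    The abstract describes a hand-off in which vision nominates regions of interest and touch refines them. A minimal Python sketch of that hand-off follows; the Region fields, the confidence threshold, and the function name are hypothetical illustrations, not the authors' implementation.

```python
# A minimal sketch of the vision-to-touch hand-off described above. The
# Region fields and the confidence threshold are hypothetical illustrations,
# not the authors' implementation.
from dataclasses import dataclass

@dataclass
class Region:
    center: tuple       # (x, y, z) centroid estimated by passive stereo
    confidence: float   # stereo depth confidence in [0, 1]

def plan_tactile_exploration(regions, threshold=0.5):
    """Pick vision-derived regions whose depth estimate is too uncertain
    and order them for active tactile probing, least confident first."""
    to_probe = [r for r in regions if r.confidence < threshold]
    return sorted(to_probe, key=lambda r: r.confidence)

regions = [Region((0.10, 0.20, 0.50), 0.9),
           Region((0.30, 0.10, 0.40), 0.2)]
for r in plan_tactile_exploration(regions):
    print("probe region at", r.center)
```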

    Mapping haptic exploratory procedures to multiple shape representations

    Research in human haptics has revealed a number of exploratory procedures (EPs) that are used in determining attributes of an object, particularly shape. This research has been used as a paradigm for building an intelligent robotic system that can perform shape recognition from touch sensing. In particular, a number of mappings between EPs and shape modeling primitives have been found. The choice of shape primitive for each EP is discussed, and results from experiments with a Utah-MIT dextrous hand system are presented. A vision algorithm to complement active touch sensing for the task of autonomous shape recovery is also presented.
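    Since the core contribution is a set of EP-to-primitive mappings, a lookup table is the natural data structure. The sketch below is hedged: the specific pairings are plausible guesses in the spirit of the abstract, not the mappings the paper reports.

```python
# A hedged sketch of an EP-to-primitive lookup. The pairings below are
# illustrative guesses in the spirit of the abstract, not the mappings
# reported in the paper.
EP_TO_PRIMITIVE = {
    "enclosure":         "bounding volume",  # whole-hand grasp gives coarse extent
    "contour_following": "surface curve",    # edge tracing recovers object contours
    "lateral_motion":    "surface patch",    # stroking fits local surface patches
    "static_contact":    "contact normal",   # a single touch yields local geometry
}

def primitive_for(ep: str) -> str:
    """Return the shape representation a given exploratory procedure recovers."""
    return EP_TO_PRIMITIVE.get(ep, "unknown")

print(primitive_for("contour_following"))  # -> surface curve
```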

    Constraint-based sensor planning for scene modeling

    We describe an automated scene modeling system that consists of two components operating in an interleaved fashion: an incremental modeler that builds solid models from range imagery, and a sensor planner that analyzes the resulting model and computes the next sensor position. This planning component is target-driven and computes sensor positions using model information about the imaged surfaces and the unexplored space in a scene. The method is shape-independent and uses a continuous-space representation that preserves the accuracy of sensed data. It is able to completely acquire a scene by repeatedly planning sensor positions, utilizing a partial model to determine volumes of visibility for contiguous areas of unexplored scene. These visibility volumes are combined with sensor placement constraints to compute sets of occlusion-free sensor positions that are guaranteed to improve the quality of the model. We show results for the acquisition of a scene that includes multiple, distinct objects with high occlusion.
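    The interleaved sense/model/plan loop can be illustrated with a deliberately tiny, runnable toy: a one-dimensional voxel "scene" in place of the paper's continuous-space representation, and a planner that simply moves to the nearest unexplored region. The voxel simplification and all names are assumptions for illustration only.

```python
# A toy, runnable sketch of the interleaved model/plan loop on a 1-D voxel
# "scene"; a simplification invented for illustration, not the paper's
# continuous-space representation.
UNKNOWN, EMPTY, SURFACE = 0, 1, 2

def acquire(scene, start=0, reach=2):
    """Repeatedly sense from a pose, merge what is visible into the model,
    then plan the next pose at the nearest still-unexplored voxel."""
    model = [UNKNOWN] * len(scene)
    pose = start
    while UNKNOWN in model:
        # Sense: reveal voxels within reach of the current pose.
        for i in range(max(0, pose - reach), min(len(scene), pose + reach + 1)):
            model[i] = scene[i]
        # Plan: target the first still-unknown voxel, if any remain.
        unknown = [i for i, v in enumerate(model) if v == UNKNOWN]
        if not unknown:
            break
        pose = unknown[0]
    return model

scene = [EMPTY, SURFACE, EMPTY, EMPTY, SURFACE, EMPTY, EMPTY]
print(acquire(scene))  # the fully acquired model matches the scene
```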

    Visual control of grasping and manipulation tasks

    This paper discusses the problem of visual control of grasping. We have implemented an object-tracking system that can be used to provide visual feedback for locating the positions of fingers and of the objects to be manipulated, as well as their relative relationships. This visual analysis can be used to control otherwise open-loop grasping systems in a number of manipulation tasks where finger contact, object movement, and task completion need to be monitored and controlled.
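    Closing the loop from tracked positions to finger motion can be sketched as a simple position servo. The proportional controller below and all names are illustrative assumptions, not the paper's tracking system.

```python
# A minimal, runnable sketch of visual feedback control: the tracked error
# between a fingertip and a contact point drives a proportional correction.
# The controller and names are assumptions for illustration.
def servo_finger(finger_pos, contact_target, gain=0.5, tol=0.01):
    """Move a (simulated) fingertip toward a visually tracked contact point."""
    steps = 0
    while abs(contact_target - finger_pos) > tol:
        error = contact_target - finger_pos   # position error from the tracker
        finger_pos += gain * error            # proportional correction
        steps += 1
    return finger_pos, steps

pos, steps = servo_finger(finger_pos=0.0, contact_target=0.10)
print(f"converged to {pos:.3f} m in {steps} steps")
```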

    Robot learning of everyday object manipulations via human demonstration

    We deal with the problem of teaching a robot to manipulate everyday objects through human demonstration. We first design a task descriptor which encapsulates the important elements of a task. The design originates from the observation that the manipulations involved in many everyday object tasks can be considered a series of sequential rotations and translations, which we call manipulation primitives. We then propose a method that enables a robot to decompose a demonstrated task into sequential manipulation primitives and construct a task descriptor. We also show how to transfer a task descriptor learned from one object to similar objects. Finally, we argue that this framework is highly generic; in particular, it can be used to construct a robot task database that serves as a manipulation knowledge base, enabling a robot to manipulate everyday objects successfully.
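    The decomposition idea can be shown with a runnable toy that segments a demonstrated motion into sequential rotation and translation primitives. The thresholding scheme and the simplified (position, angle) pose representation are assumptions for illustration, not the authors' decomposition method.

```python
# A runnable toy sketch of segmenting a demonstration into sequential
# "manipulation primitives" (rotations and translations). The thresholding
# scheme is an illustrative assumption, not the paper's method.
def decompose(poses, eps=1e-6):
    """poses: list of (position, angle) samples from a demonstration.
    Returns a task descriptor: an ordered list of primitive labels."""
    primitives = []
    for (p0, a0), (p1, a1) in zip(poses, poses[1:]):
        moved, turned = abs(p1 - p0) > eps, abs(a1 - a0) > eps
        label = "rotation" if turned and not moved else \
                "translation" if moved and not turned else "mixed"
        # Merge consecutive frames that belong to the same primitive.
        if primitives and primitives[-1] == label:
            continue
        primitives.append(label)
    return primitives

# Opening a door: rotate the handle, then pull (translate).
demo = [(0.0, 0.0), (0.0, 0.4), (0.0, 0.8), (0.1, 0.8), (0.2, 0.8)]
print(decompose(demo))  # -> ['rotation', 'translation']
```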

    Interactive sensor planning

    This paper describes an interactive sensor planning system that can be used to select viewpoints subject to camera visibility, field-of-view, and task constraints. Application areas for this method include surveillance planning, safety monitoring, architectural site design planning, and automated site modeling. Given a description of the sensor's characteristics, the objects in the 3-D scene, and the targets to be viewed, our algorithms compute the set of admissible viewpoints that satisfy the constraints. The system first builds topologically correct solid models of the scene from a variety of data sources. Viewing targets are then selected, and visibility volumes and field-of-view cones are computed and intersected to create viewing volumes where cameras can be placed. The user can interactively manipulate the scene and select multiple target features to be viewed by a camera. The user can also select candidate viewpoints within this volume to synthesize views and verify the correctness of the planning system. We present experimental results for the planning system on an actual complex city model.
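    One constraint from such a planner, the field-of-view test, and the intersection of per-target constraints can be sketched in a few lines. The geometry below is simplified to 2-D and all names are illustrative; the paper's actual volumes are 3-D cones intersected with visibility volumes.

```python
# A runnable 2-D sketch of one planning constraint: keep only candidate
# camera positions whose field-of-view cone contains every target.
# Geometry is simplified for illustration; names are assumptions.
import math

def in_fov(camera, look_dir, target, half_angle_deg):
    """True if `target` lies within the camera's field-of-view cone.
    `look_dir` is assumed to be a unit vector."""
    vx, vy = target[0] - camera[0], target[1] - camera[1]
    norm = math.hypot(vx, vy) or 1.0
    # Angle between the viewing direction and the ray to the target.
    cos_a = (vx * look_dir[0] + vy * look_dir[1]) / norm
    return math.acos(max(-1.0, min(1.0, cos_a))) <= math.radians(half_angle_deg)

def admissible(candidates, look_dir, targets, half_angle_deg=30):
    """Intersect per-target constraints: keep positions that see every target."""
    return [c for c in candidates
            if all(in_fov(c, look_dir, t, half_angle_deg) for t in targets)]

cams = [(0, 0), (5, 0), (0, 5)]
print(admissible(cams, look_dir=(1, 0), targets=[(10, 1), (10, -1)]))
```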